26 research outputs found

    Two-point coordinate rings for GK-curves

    Giulietti and Korchmáros presented new curves with the maximal number of points over a field of size q^6. Garcia, Güneri, and Stichtenoth extended the construction to curves that are maximal over fields of size q^{2n}, for odd n >= 3. The generalized GK-curves have affine equations x^q + x = y^{q+1} and y^{q^2} - y^q = z^r, for r = (q^n+1)/(q+1). We give a new proof of the maximality of the generalized GK-curves and we outline methods to efficiently obtain their two-point coordinate ring.
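    As a quick illustration of the maximality statement, the sketch below brute-forces the affine GF(q^6)-points of the GK-curve in the smallest case q = 2, n = 3 (so r = 3) and compares the count, plus the curve's single point at infinity, with the Hasse-Weil bound q^6 + 1 + 2g·q^3. The choice q = 2, the field modulus x^6 + x + 1, and the genus value g = 10 are assumptions made only for this check and are not taken from the abstract.

```python
# Hedged sanity check: count affine GF(2^6)-points of the GK-curve for q = 2,
# n = 3 (so r = (q^3 + 1)/(q + 1) = 3) and compare with the Hasse-Weil bound.
# The modulus x^6 + x + 1 and the genus g = 10 are assumptions for this sketch.

MOD = 0b1000011  # x^6 + x + 1, irreducible over GF(2); defines GF(64)

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^6), reduced modulo x^6 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000000:
            a ^= MOD
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

# Affine equations for q = 2 (characteristic 2, so '+' is XOR):
#   x^2 + x = y^3   and   y^4 + y^2 = z^3
affine = 0
for x in range(64):
    for y in range(64):
        if gf_mul(x, x) ^ x != gf_pow(y, 3):
            continue
        rhs = gf_pow(y, 4) ^ gf_mul(y, y)
        affine += sum(1 for z in range(64) if gf_pow(z, 3) == rhs)

g = 10                              # genus of the GK-curve for q = 2 (assumed)
bound = 2**6 + 1 + 2 * g * 2**3     # Hasse-Weil bound q^6 + 1 + 2*g*q^3
print(affine + 1, bound)            # '+ 1' for the single point at infinity
```

    If the curve is maximal over GF(q^6), as the abstract asserts, the two printed numbers should agree.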

    Monomial embeddings of the Klein curve

    Abstract. The Klein curve is defined by the smooth plane model X^3 Y + Y^3 Z + Z^3 X = 0. We give all embeddings in higher dimension with a linear action of the automorphism group. The curve has 24 flexpoints, i.e. points where the tangent line meets the curve with multiplicity three. In even characteristic, the embeddings yield interesting configurations of the flexpoints and good linear codes.
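    For even characteristic, the plane model can be checked numerically. The sketch below counts the projective GF(8)-points of X^3 Y + Y^3 Z + Z^3 X = 0; the choice of GF(8) and the modulus x^3 + x + 1 are assumptions made for this illustration only, and the count of 24 one expects matches the number of flexpoints mentioned in the abstract.

```python
# Hedged illustration: count projective GF(8)-points on the Klein curve
# X^3*Y + Y^3*Z + Z^3*X = 0.  The field GF(8) and the modulus x^3 + x + 1
# are choices made for this sketch only (even characteristic, as in the abstract).

MOD = 0b1011  # x^3 + x + 1, irreducible over GF(2); defines GF(8)

def mul(a, b):
    """Carry-less multiplication in GF(2^3), reduced modulo x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:
            a ^= MOD
    return r

def cube(a):
    return mul(mul(a, a), a)

def klein(x, y, z):
    # X^3 Y + Y^3 Z + Z^3 X; addition in characteristic 2 is XOR
    return mul(cube(x), y) ^ mul(cube(y), z) ^ mul(cube(z), x)

# One normalized representative per projective point.
count  = sum(1 for x in range(8) for y in range(8) if klein(x, y, 1) == 0)  # z = 1
count += sum(1 for x in range(8) if klein(x, 1, 0) == 0)                    # z = 0, y = 1
count += 1 if klein(1, 0, 0) == 0 else 0                                    # (1 : 0 : 0)
print(count)
```

    This is only a numerical spot check of the plane model, not the embedding or code construction of the paper.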

    Decoding Codes from Curves and Cyclic Codes

    With R. Kötter, "Error-locating pairs for cyclic codes," preprint Eindhoven-Linköping, submitted for publication, March 1993.

    Weight distributions of geometric Goppa codes

    Abstract. The generally hard problem of computing weight distributions of linear codes is considered for the special class of algebraic-geometric codes, defined by Goppa in the early eighties. Known results are restricted to codes from elliptic curves. We obtain results for curves of higher genus by expressing the weight distributions in terms of L-series. The results include general properties of weight distributions, a method to describe and compute weight distributions, and worked-out examples for curves of genus two and three.
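    The paper's method works through L-series; purely as a reminder of the object being computed, the sketch below brute-forces the weight distribution of a small binary code. The [8,4,4] first-order Reed-Muller code used here is an assumption chosen for illustration and is unrelated to the genus-two and genus-three curves treated in the paper.

```python
from itertools import product
import numpy as np

# Generator matrix of the first-order Reed-Muller code RM(1,3), an [8,4,4] code
# (a small example chosen only to illustrate what a weight distribution is).
G = np.array([
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [0, 0, 1, 1, 0, 0, 1, 1],
    [0, 1, 0, 1, 0, 1, 0, 1],
])

dist = [0] * (G.shape[1] + 1)
for m in product([0, 1], repeat=G.shape[0]):   # all 2^k messages
    w = int((np.array(m) @ G % 2).sum())       # Hamming weight of the codeword mG
    dist[w] += 1
print(dist)   # [1, 0, 0, 0, 14, 0, 0, 0, 1], i.e. A_0 = 1, A_4 = 14, A_8 = 1
```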

    On erasure decoding of AG-codes

    We present a scheme for erasure decoding of AG-codes of complexity O(n^2). This improves on methods involving Fourier transforms. The trivial scheme for erasure decoding of AG-codes (Algebraic Geometry codes in full, or Geometric Goppa codes) solves a system of linear equations. That scheme is of complexity O(n^3), where n denotes the code length. Fast schemes that use a Fourier transform of all syndromes improve on this complexity. For codes over GF(q) defined with curves in r-dimensional affine space, there are (q-1)^r syndromes to be computed. The transformation itself is of complexity O(nq^r) [1]. In general, q^r > n. The determination of the syndromes is computationally equivalent to the determination of the message symbols. We give a scheme for the computation of the message symbols. It avoids the computation of the syndromes and the use of the Fourier transform. The scheme requires 3kn field multiplications, with k the dimension of the code.

    Notation 1. The notation will be as follows. Let C be a linear code of type [n, k], with generator matrix G and parity check matrix H. Let m = (m_i), c = (c_i), e = (e_i), y = (y_i) denote a message, the encoded message, an error vector, and the received message, respectively. The vectors are related via c = mG, y = c + e. Let it be known that errors occurred only at the coordinates I ⊂ {1, 2, ..., n}. For a fixed code C, erasure decoding concerns the determination of the message m, given the received message y and the set of unreliable positions I. In algebraic decoding, the set of unreliable positions I will in general be given as the set of zeros of an error-locating vector [2].

    Definition 2. By u * e = (u_i e_i) we denote the componentwise product of the vectors u and e. A vector u is called error-locating if it has the support of e among its zeros, or equivalently, if u * e = 0.

    We recall two known solutions to erasure decoding. Both compute the error vector first.

    Proposition 3. The error vector e can be computed from the system of linear equations

    (1.1) He^T = Hy^T, e_i = 0 for i ∉ I.

    Alternatively, for a regular square matrix H̄ of size n' ≥ n and a complete syndrome vector s^T = H̄e^T, it can be obtained as

    (1.2) e^T = H̄^{-1} s^T.

    The computation of the message vector then proceeds in both cases via (2) compute c = y − e, and (3) compute m, with c = mG.
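    To make Proposition 3 concrete, the following sketch carries out erasure decoding via the linear system (1.1) for a small unrelated code: the systematic [7,4] binary Hamming code. The code, the erased positions, and the helper solve_gf2 are assumptions made for this illustration; this is the trivial scheme recalled in the abstract, not the faster 3kn-multiplication scheme the paper proposes.

```python
import numpy as np

# The systematic [7,4] binary Hamming code: G = [I | P], H = [P^T | I].
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]], dtype=int)
G = np.hstack([np.eye(4, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

def solve_gf2(A, b):
    """Solve A x = b over GF(2) by Gaussian elimination (A of full column rank)."""
    A, b = A.copy() % 2, b.copy() % 2
    rows, cols = A.shape
    row, pivots = 0, []
    for col in range(cols):
        pivot = next((r for r in range(row, rows) if A[r, col]), None)
        if pivot is None:
            continue
        A[[row, pivot]], b[[row, pivot]] = A[[pivot, row]], b[[pivot, row]]
        for r in range(rows):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                b[r] ^= b[row]
        pivots.append(col)
        row += 1
    x = np.zeros(cols, dtype=int)
    for r, col in enumerate(pivots):
        x[col] = b[r]
    return x

m_msg = np.array([1, 0, 1, 1])
c = m_msg @ G % 2                  # encode: c = mG
I = [2, 5]                         # erased (unreliable) positions, known to the decoder
y = c.copy()
y[I] ^= 1                          # corrupt the erased positions

s = H @ y % 2                      # syndrome: H y^T = H e^T
e = np.zeros(7, dtype=int)
e[I] = solve_gf2(H[:, I], s)       # (1.1): solve for e restricted to the columns in I
c_hat = (y + e) % 2                # step (2): c = y - e (same as y + e over GF(2))
m_hat = c_hat[:4]                  # step (3): G is systematic, so m is the first k symbols
print(m_hat, bool((m_hat == m_msg).all()))
```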